Accelerated Forward-Backward Optimization using Deep Learning

Jevgenija Rudzusika (KTH Stockholm)

20-Jul-2021, 10:15-11:45

Abstract: We propose several deep-learning-accelerated optimization solvers with convergence guarantees. We draw on the analysis of accelerated forward-backward schemes such as FISTA, but instead of the classical approach of proving convergence for a fixed choice of parameters (such as a step size), we prove convergence whenever the update is chosen from a specific set. Rather than selecting a point in this set by a predefined rule, we train a deep neural network to pick the best update. Finally, we demonstrate that the method applies to several smooth and non-smooth optimization problems and outperforms established accelerated solvers.
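The abstract describes safeguarding a learned update: the network may propose any point, but the proposal is used only when it lies in a set for which convergence can be proven. Below is a minimal sketch of that general idea for LASSO, not the speaker's actual method: the acceptance test (sufficient decrease relative to a standard forward-backward step) and the proposal function propose_update are illustrative assumptions standing in for the convergence set and trained network from the talk.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(A, b, lam, x):
    # LASSO objective: 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()

def fb_step(A, b, lam, x, step):
    # Classical forward-backward (ISTA) step with step size `step`.
    grad = A.T @ (A @ x - b)
    return soft_threshold(x - step * grad, step * lam)

def safeguarded_solver(A, b, lam, propose_update, n_iter=200):
    # Use the learned proposal when it passes a sufficient-decrease test,
    # otherwise fall back to the provably convergent forward-backward step.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x_safe = fb_step(A, b, lam, x, 1.0 / L)
        x_net = propose_update(x)  # e.g. the output of a trained network
        # Accept the proposal only if it decreases the objective at least as
        # much as the safe step (a stand-in for "update lies in the set").
        if objective(A, b, lam, x_net) <= objective(A, b, lam, x_safe):
            x = x_net
        else:
            x = x_safe
    return x

# Usage: with no trained network at hand, an over-relaxed ISTA step
# (step size 1.5/L < 2/L, still convergent) serves as a dummy proposal.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = rng.standard_normal(50)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2
x_hat = safeguarded_solver(A, b, lam,
                           lambda x: fb_step(A, b, lam, x, 1.5 / L))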

machine learning, numerical analysis, optimization and control

Audience: researchers in the topic


Mathematics of Deep Learning

Series comments: Please fill out the following form to register for our email list, where talk announcements and zoom details are distributed: docs.google.com/forms/d/e/1FAIpQLSeWAzBXsXRqpJhHDKODywySl_BWZN-Cbrik_4bEun2fGwhOKg/viewform?usp=sf_link

Slides: drive.google.com/drive/folders/1w9lNCGWZyzGFxxuVvhJOcjlc92X2toJg?usp=sharing

Videos: www.fau.tv/course/id/878

Organizers: Leon Bungert*, Daniel Tenbrinck
Curator: Martin Burger
*contact for this listing
